
    Mapping Shape to Visuomotor Mapping: Learning and Generalisation of Sensorimotor Behaviour Based on Contextual Information

    Humans can learn and store multiple visuomotor mappings (dual-adaptation) when feedback for each is provided alternately. Moreover, learned context cues associated with each mapping can be used to switch between the stored mappings. However, little is known about the associative learning between cue and required visuomotor mapping, and about how learning generalises to novel but similar conditions. To investigate these questions, participants performed a rapid target-pointing task while we manipulated the offset between visual feedback and movement end-points. The visual feedback was presented with horizontal offsets of different amounts, dependent on the target's shape. Participants thus needed to use different visuomotor mappings between target location and required motor response, depending on the target shape, in order to 'hit' it. The target shapes were taken from a continuous set of shapes, morphed between spiky and circular. After training, we tested participants' performance, without feedback, on target shapes that had not been trained. We compared two hypotheses. First, we hypothesised that participants could (explicitly) extract the linear relationship between target shape and visuomotor mapping and generalise accordingly. Second, building on previous findings in visuomotor learning, we developed an (implicit) Bayesian learning model that predicts generalisation more consistent with categorisation (i.e. using one mapping or the other). The experimental results show that, although learning the associations requires explicit awareness of the cues' role, participants apply the mapping corresponding to the trained shape most similar to the current one, consistent with the Bayesian learning model. Furthermore, the Bayesian learning model predicts that learning should slow down as the number of training pairs increases, which the present results confirmed. In short, the good correspondence between the Bayesian learning model and the empirical results indicates that this model offers a possible mechanism for simultaneously learning multiple visuomotor mappings.
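
    The distinction between the two hypotheses can be made concrete with a small sketch. Below is a minimal, hypothetical implementation of the categorisation-style prediction, assuming Gaussian shape-encoding noise on a one-dimensional morph axis and a flat prior over two trained shape-offset pairs; all values (trained shapes, offsets, noise level) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Two trained shape-mapping pairs on a 1-D morph axis: shape coded in
# [0, 1] (0 = spiky, 1 = circular), each paired with a horizontal
# feedback offset. All values are illustrative assumptions.
trained_shapes = np.array([0.2, 0.8])
trained_offsets = np.array([-2.0, 2.0])      # cm
sigma = 0.1                                  # assumed shape-encoding noise

def predicted_offset(shape):
    """Posterior-weighted mapping for a (possibly novel) test shape.

    With narrow likelihoods the posterior weights saturate, so the
    output is close to whichever trained mapping is nearest
    (categorisation) rather than a linear interpolation.
    """
    lik = np.exp(-0.5 * ((shape - trained_shapes) / sigma) ** 2)
    w = lik / lik.sum()            # posterior over stored mappings (flat prior)
    return w @ trained_offsets     # model-averaged offset

for s in [0.2, 0.4, 0.5, 0.6, 0.8]:
    print(f"shape {s:.1f} -> offset {predicted_offset(s):+.2f} cm")
```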

    Multisensory causal inference in the brain

    At any given moment, our brain processes multiple inputs from its different sensory modalities (vision, hearing, touch, etc.). In deciphering this array of sensory information, the brain has to solve two problems: (1) which of the inputs originate from the same object and should be integrated, and (2) for the sensations originating from the same object, how best to integrate them. Recent behavioural studies suggest that the human brain solves these problems using optimal probabilistic inference, known as Bayesian causal inference. However, how and where the underlying computations are carried out in the brain has remained unknown. By combining neuroimaging-based decoding techniques and computational modelling of behavioural data, a new study now sheds light on how multisensory causal inference maps onto specific brain areas. The results suggest that the complexity of neural computations increases along the visual hierarchy, and they link specific components of the causal inference process with specific visual and parietal regions.
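
    The two computations named above have a compact standard form. The following sketch follows the form of the Bayesian causal inference model of Körding et al. (2007) with a zero-mean spatial prior; the noise and prior parameters are assumptions, and the model here is illustrative of the framework rather than of the new study's exact implementation.

```python
import numpy as np

sigma_v, sigma_a, sigma_p = 1.0, 4.0, 10.0   # visual, auditory, prior SDs (deg)
p_common = 0.5                               # prior probability of one cause

def posterior_common(xv, xa):
    """P(common cause | visual sample xv, auditory sample xa)."""
    # Likelihood of the sample pair under one shared source (source
    # integrated out analytically)...
    v1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
          + sigma_a**2 * sigma_p**2)
    l1 = np.exp(-0.5 * ((xv - xa)**2 * sigma_p**2 + xv**2 * sigma_a**2
                        + xa**2 * sigma_v**2) / v1) / (2 * np.pi * np.sqrt(v1))
    # ...and under two independent sources.
    vv, va = sigma_v**2 + sigma_p**2, sigma_a**2 + sigma_p**2
    l2 = (np.exp(-0.5 * (xv**2 / vv + xa**2 / va))
          / (2 * np.pi * np.sqrt(vv * va)))
    return l1 * p_common / (l1 * p_common + l2 * (1 - p_common))

def auditory_estimate(xv, xa):
    """Model-averaged auditory location estimate (deg)."""
    pc = posterior_common(xv, xa)
    prec = np.array([sigma_v**-2, sigma_a**-2, sigma_p**-2])
    fused = (prec[0] * xv + prec[1] * xa) / prec.sum()   # one cause
    segregated = prec[1] * xa / (prec[1] + prec[2])      # two causes
    return pc * fused + (1 - pc) * segregated

for disparity in [0.0, 4.0, 12.0]:
    print(disparity, auditory_estimate(xv=0.0, xa=disparity))
```

    As the audiovisual disparity grows, the posterior probability of a common cause falls and the auditory estimate reverts toward the segregated solution.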

    The Change in Fingertip Contact Area as a Novel Proprioceptive Cue

    Humans, many animals, and certain robotic hands have deformable fingertip pads [1, 2]. Deformable pads have the advantage of conforming to the objects being touched, ensuring a stable grasp over a large range of forces and shapes. Pad deformations change with finger displacements during touch. Pushing a finger against an external surface typically produces an increase in the gross contact area [3], potentially providing a relative motion cue, a situation comparable to looming in vision [4]. The rate of increase of the contact area also depends on the compliance of the object [5]. Because objects normally do not suddenly change compliance, participants may interpret an artificially induced variation in compliance, which coincides with a change in the gross contact area, as a change in finger displacement, and consequently misestimate their finger's position relative to the touched object. To test this, we asked participants to compare the perceived displacements of their finger while contacting an object whose compliance varied pseudo-randomly from trial to trial. The results indicate a bias in the perception of finger displacement induced by the change in compliance, and hence in contact area, indicating that participants interpreted the altered cutaneous input as a cue to proprioception. This situation highlights the capacity of the brain to take advantage of knowledge of the mechanical properties of the body and of the external environment.
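
    A toy calculation can make the predicted bias explicit. This sketch assumes, purely for illustration, that gross contact area grows linearly with indentation depth at a compliance-dependent gain; both the linear form and all numbers are invented.

```python
# Toy model: contact area (mm^2) grows linearly with indentation (mm),
# with a slope that depends on the object's compliance. Both the linear
# form and all numbers are assumptions for illustration.
def contact_area(displacement_mm, gain_mm2_per_mm):
    return gain_mm2_per_mm * displacement_mm

expected_gain = 10.0   # gain the observer has calibrated to (assumed)
actual_gain = 14.0     # a more compliant object: area grows faster

true_displacement = 5.0                              # mm
area = contact_area(true_displacement, actual_gain)  # observed cue
inferred = area / expected_gain                      # observer's inversion

# An unannounced compliance change thus biases perceived displacement.
print(f"true {true_displacement:.1f} mm, inferred {inferred:.1f} mm")
```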

    Submonolayer Epitaxy Without A Critical Nucleus

    The nucleation and growth of two-dimensional islands is studied with Monte Carlo simulations of a pair-bond solid-on-solid model of epitaxial growth. The conventional description of this problem in terms of a well-defined critical island size fails because no islands are absolutely stable against single-atom detachment by thermal bond breaking. When two-bond scission is negligible, we find that the ratio of the dimer dissociation rate to the rate of adatom capture by dimers uniquely indexes both the island size distribution scaling function and the dependence of the island density on the flux and the substrate temperature. Effective pair-bond model parameters are found that yield excellent quantitative agreement with scaling functions measured for Fe/Fe(001).
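
    A stripped-down, one-dimensional sketch of the key ingredient, reversible aggregation with single-atom detachment by thermal bond breaking and two-bond scission neglected, can be written in a few lines. The paper's model is a two-dimensional pair-bond solid-on-solid model; everything below (lattice size, rates, barrier) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 1000                  # lattice sites (periodic)
D, F = 1e4, 1.0           # hop-attempt rate per atom, deposition rate per site
E1_over_kT = 4.0          # single lateral-bond scission barrier
p_detach = np.exp(-E1_over_kT)

occupied = set()
target = int(0.1 * L)     # stop at 10% coverage
atoms = []                # positions of deposited atoms

while len(atoms) < target:
    total = F * L + D * len(atoms)
    if rng.random() < F * L / total:
        s = int(rng.integers(L))          # deposition attempt
        if s not in occupied:             # attempts on filled sites rejected
            occupied.add(s)
            atoms.append(s)
    else:
        k = int(rng.integers(len(atoms))) # hop attempt by a random atom
        i = atoms[k]
        j = (i + int(rng.choice([-1, 1]))) % L
        bonds = ((i - 1) % L in occupied) + ((i + 1) % L in occupied)
        # Free atoms hop; singly bonded atoms detach with Arrhenius
        # probability; doubly bonded atoms never move (no two-bond scission).
        if j not in occupied and bonds < 2:
            if bonds == 0 or rng.random() < p_detach:
                occupied.discard(i)
                occupied.add(j)
                atoms[k] = j

# Island size distribution: run lengths of occupied sites (monomers included)
occ = np.zeros(L, dtype=np.int8)
occ[list(occupied)] = 1
runs = np.diff(np.flatnonzero(np.diff(np.r_[0, occ, 0])))[::2]
print(f"{len(runs)} islands, mean size {runs.mean():.2f}")
```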

    Multi-Timescale Perceptual History Resolves Visual Ambiguity

    When visual input is inconclusive, does previous experience aid the visual system in attaining an accurate perceptual interpretation? Prolonged viewing of a visually ambiguous stimulus causes perception to alternate between conflicting interpretations. When viewed intermittently, however, ambiguous stimuli tend to evoke the same percept on many consecutive presentations. This perceptual stabilization has been suggested to reflect persistence of the most recent percept throughout the blank that separates two presentations. Here we show that the memory trace that causes stabilization reflects not just the latest percept, but perception during a much longer period. That is, the choice between competing percepts at stimulus reappearance is determined by an elaborate history of prior perception. Specifically, we demonstrate a seconds-long influence of the latest percept, as well as a more persistent influence based on the relative proportion of dominance during a preceding period of at least one minute. When short-term and long-term perceptual history are opposed (because perception has recently switched after prolonged stabilization), the long-term influence recovers after the effect of the latest percept has worn off, indicating independence between the time scales. We accommodate these results by adding two positive adaptation terms, one with a short time constant and one with a long time constant, to a standard model of perceptual switching.
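
    The proposed extension can be sketched as follows: a standard two-population rivalry model (mutual inhibition with adaptation) is given both a fast and a slow adaptation term, the latter tracking dominance over the preceding minute. All parameter values below are illustrative assumptions, not fits to the data.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, T = 0.01, 120.0            # step (s) and simulated duration (s)
steps = int(T / dt)
beta = 3.0                     # cross-inhibition strength
tau_r = 0.1                    # population time constant (s)
g_fast, tau_fast = 2.0, 2.0    # seconds-scale adaptation
g_slow, tau_slow = 1.0, 60.0   # minute-scale 'perceptual history'

relu = lambda x: np.maximum(x, 0.0)

r = np.array([0.1, 0.0])       # activities of the two percept populations
a_fast = np.zeros(2)
a_slow = np.zeros(2)
dom = np.empty(steps, dtype=int)

for k in range(steps):
    inp = 1.0 + 0.05 * rng.standard_normal(2)        # ambiguous input + noise
    drive = inp - beta * r[::-1] - g_fast * a_fast - g_slow * a_slow
    r += dt / tau_r * (-r + relu(drive))
    a_fast += dt / tau_fast * (-a_fast + r)          # tracks recent dominance
    a_slow += dt / tau_slow * (-a_slow + r)          # tracks long-term history
    dom[k] = int(r[0] > r[1])

print("fraction of time percept A dominated:", dom.mean())
```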

    Multisensory Oddity Detection as Bayesian Inference

    A key goal for the perceptual system is to optimally combine information from all the senses that may be available, in order to develop the most accurate and unified picture possible of the outside world. The contemporary theoretical framework of ideal-observer maximum likelihood integration (MLI) has been highly successful in modelling how the human brain combines information from a variety of different sensory modalities. However, in various recent experiments involving multisensory stimuli of uncertain correspondence, MLI breaks down as a successful model of sensory combination. Within the paradigm of direct stimulus estimation, perceptual models which use Bayesian inference to resolve correspondence have recently been shown to generalize successfully to these cases where MLI fails. This approach has been known variously as model inference, causal inference or structure inference. In this paper, we examine causal uncertainty in another important class of multisensory perception paradigm – oddity detection – and demonstrate how a Bayesian ideal observer also treats oddity detection as a structure inference problem. We validate this approach by showing that it provides an intuitive and quantitative explanation of an important pair of multisensory oddity detection experiments – involving cues across and within modalities – for which MLI previously failed dramatically, allowing a novel unifying treatment of within- and cross-modal multisensory perception. Our successful application of structure inference models to the new ‘oddity detection’ paradigm, and the resulting unified explanation of across- and within-modality cases, provide further evidence that structure inference may be a commonly evolved principle for combining perceptual information in the brain.
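
    The structure-inference view of oddity detection can be illustrated with a minimal Gaussian observer: each of three intervals yields a noisy measurement, exactly one interval is generated by an 'odd' source, and the observer scores each candidate assignment by its marginal likelihood. The generative parameters below are assumptions for illustration, not the paper's model of the experiments.

```python
import numpy as np
from scipy.stats import norm

sigma = 1.0      # measurement noise SD
sigma_s = 2.0    # prior SD of the common source value
sigma_o = 2.0    # prior SD of the odd source value

def log_marginal_common(xs):
    """log p(xs) when all measurements in xs share one Gaussian source."""
    n = len(xs)
    var_post = 1.0 / (n / sigma**2 + 1.0 / sigma_s**2)   # posterior variance
    quad = var_post * (xs.sum() / sigma**2) ** 2          # completed square
    return (-0.5 * (xs**2).sum() / sigma**2 + 0.5 * quad
            + 0.5 * np.log(var_post / sigma_s**2)
            - n * np.log(np.sqrt(2 * np.pi) * sigma))

def infer_odd(x):
    """Return the index of the most probably odd interval among three."""
    scores = []
    for odd in range(3):
        rest = np.delete(x, odd)
        scores.append(log_marginal_common(rest)
                      + norm.logpdf(x[odd], 0.0, np.sqrt(sigma**2 + sigma_o**2)))
    return int(np.argmax(scores))

print(infer_odd(np.array([0.1, -0.3, 3.2])))   # interval 2 should win
```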

    The threshold for the McGurk effect in audio-visual noise decreases with development

    Across development, vision increasingly influences audio-visual perception. This is evidenced in illusions such as the McGurk effect, in which a seen mouth movement changes the perceived sound. The current paper assessed the effects of manipulating the clarity of the heard and seen signal upon the McGurk effect in children aged 3–6 (n=29), 7–9 (n=32) and 10–12 (n=29) years, and adults aged 20–35 years (n=32). Auditory noise increased, and visual blur decreased, the likelihood of vision changing auditory perception. Based upon a proposed developmental shift from auditory to visual dominance, we predicted that younger children would be less susceptible to McGurk responses, and that adults would continue to be influenced by vision at higher levels of visual noise and with less auditory noise. Susceptibility to the McGurk effect was higher in adults compared with 3–6-year-olds and 7–9-year-olds, but not 10–12-year-olds. Younger children required more auditory noise, and less visual noise, than adults to induce McGurk responses (i.e. adults and older children were more easily influenced by vision). Reduced susceptibility in childhood supports the theory that sensory dominance shifts across development and reaches adult-like levels by 10 years of age.
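
    One conventional way to quantify such a threshold (a hypothetical sketch, not the paper's analysis pipeline) is to fit a logistic psychometric function to the proportion of McGurk responses across noise levels; the data points below are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Proportion of McGurk responses at each auditory noise level
# (invented placeholder values, for illustration only).
noise = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
p_mcgurk = np.array([0.05, 0.10, 0.30, 0.60, 0.80, 0.90])

def logistic(x, x0, k):
    """Psychometric function: 50% point at x0, slope k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, noise, p_mcgurk, p0=[0.5, 5.0])
print(f"estimated threshold (50% McGurk) at noise level {x0:.2f}")
```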

    Anomalous Dimension and Spatial Correlations in a Point-Island Model

    We examine the island size distribution function and the spatial correlation function of a model for island growth in the submonolayer regime, in both one and two dimensions. In our model the islands do not grow in shape, and a fixed number of adatoms are added and, as they diffuse, nucleate or are trapped at islands. We study various critical island sizes i for nucleation as a function of initial coverage. We find anomalous scaling of the island size distribution for large i. Using scaling arguments, random-walk theory, and a version of mean-field theory, we obtain a closed form for the spatial correlation function. Our analytic results are verified by Monte Carlo simulations.
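
    A toy version of such a point-island simulation can be sketched as follows; it is one-dimensional, fixes the critical size at i = 1, and is illustrative only, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

# L sites on a ring, N adatoms deposited at once, critical size i = 1:
# a dimer is already stable, and larger clusters only grow by capture.
# Islands occupy a single site ("point islands") and never grow in shape.
L, N, i_crit = 2000, 200, 1

free = list(rng.choice(L, size=N, replace=False))   # diffusing adatoms
island = {}                                          # site -> island size

while free:
    k = int(rng.integers(len(free)))
    free[k] = (free[k] + int(rng.choice([-1, 1]))) % L   # one random-walk step
    s = free[k]
    if s in island:                       # capture by an existing island
        island[s] += 1
        free.pop(k)
    else:
        here = [j for j, p in enumerate(free) if p == s]
        if len(here) > i_crit:            # i+1 walkers meet -> nucleation
            island[s] = len(here)
            for j in sorted(here, reverse=True):
                free.pop(j)

sizes = np.array(list(island.values()))
print(f"{len(sizes)} islands, mean size {sizes.mean():.2f}, max {sizes.max()}")
```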

    Spatially valid proprioceptive cues improve the detection of a visual stimulus

    Vision and proprioception are the main sensory modalities that convey hand location and direction of movement. Fusion of these sensory signals into a single robust percept is now well documented. However, it is not known whether these modalities also interact in the spatial allocation of attention, as has been demonstrated for other modality pairings. The aim of this study was to test whether proprioceptive signals can spatially cue a visual target to improve its detection. Participants used a planar manipulandum in a forward reaching action and judged during this movement whether a near-threshold visual target appeared at either of two lateral positions. The target presentation was followed by a masking stimulus, which made its possible location unambiguous, but not its presence. Proprioceptive cues were given by applying a brief lateral force to the participant’s arm, either in the same direction (validly cued) or in the opposite direction (invalidly cued) relative to the on-screen location of the mask. Detection sensitivity (d′) for the target increased when the direction of the proprioceptive stimulus was compatible with the location of the visual target, compared with when it was incompatible. These results suggest that proprioception influences the allocation of attention in visual space.
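
    For reference, a d′ of this kind is conventionally computed from hit and false-alarm rates per cueing condition; the sketch below uses a standard log-linear correction, and the trial counts are invented placeholders rather than the study's data.

```python
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """Sensitivity from trial counts, with a log-linear correction that
    guards against hit or false-alarm rates of exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

print("valid:  ", d_prime(hits=70, misses=30, fas=20, crs=80))
print("invalid:", d_prime(hits=55, misses=45, fas=22, crs=78))
```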